
    Hybridization of evolutionary algorithms and local search by means of a clustering method

    This paper presents a hybrid evolutionary algorithm (EA) to solve nonlinear-regression problems. Although EAs have proven their ability to explore large search spaces, they are comparatively inefficient at fine-tuning solutions. This drawback is usually avoided by means of local optimization algorithms applied to the individuals of the population; algorithms that use local optimization procedures are usually called hybrid algorithms. On the other hand, it is well known that a clustering process enables the creation of groups (clusters) of mutually close points that hopefully correspond to relevant regions of attraction, so that a local-search procedure can be started once in every such region. This paper proposes the combination of an EA, a clustering process, and a local-search procedure for the evolutionary design of product-unit neural networks. In the methodology presented, only a few individuals are subject to local optimization, and the local optimization algorithm is applied only at specific stages of the evolutionary process. Our results show a favorable performance when the proposed regression method is compared to other standard methods.
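    The interplay of the three components can be sketched in a few lines. The following is a purely illustrative Python sketch, not the paper's actual algorithm: a toy one-dimensional objective, bucket-based clustering, and hill-climbing local search are all assumptions. Evolution explores, clustering identifies regions of attraction, and local search fine-tunes one representative per region, only at specific generations.

```python
import random

def fitness(x):
    # Toy objective standing in for a nonlinear-regression error (assumption).
    return (x - 3.0) ** 2

def cluster(population, width=1.0):
    # Group mutually close individuals into buckets; each bucket is a
    # crude stand-in for a region of attraction found by clustering.
    groups = {}
    for x in population:
        groups.setdefault(round(x / width), []).append(x)
    return list(groups.values())

def local_search(x, step=0.1, iters=50):
    # Simple hill climbing: the fine-tuning step that EAs are weak at.
    for _ in range(iters):
        for cand in (x - step, x + step):
            if fitness(cand) < fitness(x):
                x = cand
    return x

def hybrid_ea(pop_size=20, generations=10, seed=0):
    rng = random.Random(seed)
    pop = [rng.uniform(-10.0, 10.0) for _ in range(pop_size)]
    for g in range(generations):
        # Mutation-only evolution, for brevity.
        pop = sorted(pop, key=fitness)[:pop_size // 2]
        pop += [x + rng.gauss(0.0, 0.5) for x in pop]
        # Local search only at specific stages, and only on a few
        # individuals: the best point of each cluster.
        if g % 5 == 4:
            reps = [min(c, key=fitness) for c in cluster(pop)]
            pop[:len(reps)] = [local_search(r) for r in reps]
    return min(pop, key=fitness)
```

    Starting each local search from the best point of a cluster, rather than from every individual, keeps the cost of the local phase low while still covering each basin once.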

    Memetic Pareto Evolutionary Artificial Neural Networks for the determination of growth limits of Listeria monocytogenes

    The main objective of this work is to automatically design neural network models with sigmoidal basis units for classification tasks, so that the classifiers obtained are as balanced as possible in terms of CCR and Sensitivity (given by the lowest percentage of examples correctly predicted to belong to each class). We present a Memetic Pareto Evolutionary NSGA2 (MPENSGA2) approach based on the Pareto-NSGAII evolution (PNSGAII) algorithm. We propose to augment it with a local search using the improved Rprop (iRprop) algorithm for the prediction of growth/no growth of L. monocytogenes as a function of storage temperature, pH, citric acid (CA) and ascorbic acid (AA). The results obtained show that the generalization ability can be improved more efficiently within a multi-objective framework than within a single-objective one.
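    The two quantities being traded off are easy to state precisely. A minimal Python sketch of the two metrics as described above (the data in the usage example are invented):

```python
def ccr_and_min_sensitivity(y_true, y_pred, classes):
    # CCR: fraction of all examples correctly classified.
    ccr = sum(t == p for t, p in zip(y_true, y_pred)) / len(y_true)
    # Sensitivity, as defined in the text: the LOWEST per-class
    # accuracy, i.e. the accuracy on the worst-treated class.
    per_class = []
    for c in classes:
        idx = [i for i, t in enumerate(y_true) if t == c]
        per_class.append(sum(y_pred[i] == c for i in idx) / len(idx))
    return ccr, min(per_class)

# Invented toy labels: high CCR can coexist with a poorly served class.
ccr, min_s = ccr_and_min_sensitivity([0, 0, 0, 1, 1], [0, 0, 1, 1, 1], [0, 1])
```

    A single-objective model that maximizes only CCR can quietly sacrifice the minority class; evolving a Pareto front over (CCR, minimum sensitivity) makes that trade-off explicit.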

    Evolutionary and statistical learning algorithms for the determination of weed maps using remote-sensing techniques

    This work addresses binary classification problems using a hybrid methodology that combines logistic regression with evolutionary product-unit neural network models. The model coefficients are estimated in two stages: first, the exponents of the product-unit functions are learned by training the neural network models with evolutionary computation; then, once the number of potential functions and their exponents have been estimated, the maximum-likelihood method is applied to the feature space formed by the initial covariates together with the new basis functions obtained from training the product-unit models. This hybrid methodology, in both the design of the model and the estimation of its coefficients, is applied to a real agronomic problem: predicting the presence of the weed Ridolfia segetum Moris in sunflower crop fields. The results obtained with this model improve on those of standard logistic regression in terms of the percentage of correctly classified patterns on the generalization set.
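    The two-stage idea can be illustrated compactly. In this hedged Python sketch, the exponents, data and labels are invented placeholders: the exponents stand in for what the evolutionary stage would produce, and the second stage is plain gradient-ascent logistic regression on the augmented feature space.

```python
import math

def product_unit(x, exponents):
    # A product unit computes prod_j x_j ** w_j; in the paper the
    # exponents are learned by the evolutionary algorithm (stage one).
    out = 1.0
    for xj, wj in zip(x, exponents):
        out *= xj ** wj
    return out

def fit_logistic(X, y, lr=0.5, epochs=2000):
    # Maximum-likelihood logistic regression by gradient ascent,
    # applied to the augmented feature space (stage two).
    w = [0.0] * (len(X[0]) + 1)          # bias + one weight per feature
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            z = w[0] + sum(wj * xj for wj, xj in zip(w[1:], xi))
            p = 1.0 / (1.0 + math.exp(-z))
            g = yi - p                   # gradient of the log-likelihood
            w[0] += lr * g
            for j, xj in enumerate(xi):
                w[j + 1] += lr * g * xj
    return w

def predict(w, xi):
    z = w[0] + sum(wj * xj for wj, xj in zip(w[1:], xi))
    return 1 if z > 0 else 0

# Hypothetical exponents, as if returned by the evolutionary stage.
exponents = [2.0, 1.0]
X_raw = [[0.2, 0.1], [0.3, 0.2], [0.8, 0.9], [0.9, 0.7]]
y = [0, 0, 1, 1]
# Augment the original covariates with the product-unit basis function.
X = [xi + [product_unit(xi, exponents)] for xi in X_raw]
w = fit_logistic(X, y)
```

    The key point is that the final classifier stays a linear (logistic) model, just over a richer basis, so the maximum-likelihood stage remains a standard convex fit.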

    Nonlinear regression through the evolution of hybrid neural network models

    This work is a first approach to building neural network models with hybrid hidden units (sigmoid and product units) which, being universal approximators, can be used as nonlinear regression models when the characteristics of the space of independent variables make this advisable. Given the difficulty of applying local-search learning algorithms to this type of model, an evolutionary programming algorithm with specifically defined mutation operators is used. Experiments on four test functions, the three Friedman functions and one proposed by the authors, show very promising results in this direction.
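    The forward pass of such a hybrid model is straightforward to write down. This is an illustrative Python sketch only; the function name, argument layout and example weights are assumptions, not the paper's notation:

```python
import math

def hybrid_forward(x, sigmoid_units, product_units, out_weights, bias):
    # Hidden layer mixes two unit types: each sigmoid unit computes
    # s(w . x + b), each product unit computes prod_j x_j ** w_j.
    h = []
    for w, b in sigmoid_units:
        z = b + sum(wi * xi for wi, xi in zip(w, x))
        h.append(1.0 / (1.0 + math.exp(-z)))
    for w in product_units:
        p = 1.0
        for wi, xi in zip(w, x):
            p *= xi ** wi
        h.append(p)
    # Linear output combines both kinds of hidden activations.
    return bias + sum(o * hi for o, hi in zip(out_weights, h))

# One sigmoid unit (zero weights -> activation 0.5) plus one product
# unit (exponents 1,1 -> x1 * x2), with invented output weights.
y = hybrid_forward([1.0, 2.0], [([0.0, 0.0], 0.0)], [[1.0, 1.0]],
                   [2.0, 1.0], 0.0)
```

    The mix of unit types is exactly what makes gradient-based local search awkward (product units create highly multimodal error surfaces), motivating the evolutionary-programming approach with tailored mutation operators.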

    Cooperative coevolution of artificial neural network ensembles for pattern classification

    This paper presents a cooperative coevolutionary approach for designing neural network ensembles. Cooperative coevolution is a recent paradigm in evolutionary computation that allows the effective modeling of cooperative environments. Although, in theory, a single neural network with a sufficient number of hidden neurons would suffice to solve any problem, in practice many real-world problems are too hard for such a network to be constructed, and neural network ensembles are a successful alternative. Nevertheless, the design of neural network ensembles is a complex task. In this paper, we propose a general framework for designing neural network ensembles by means of cooperative coevolution. The proposed model has two main objectives: first, improving the combination of the trained individual networks; second, evolving those networks cooperatively, encouraging collaboration among them instead of training each network separately. To favor this cooperation, each network is evaluated throughout the evolutionary process using a multiobjective method: for each network, different objectives are defined, considering not only its performance on the given problem but also its cooperation with the rest of the networks. In addition, a population of ensembles is evolved, improving the combination of networks and obtaining subsets of networks that form ensembles which perform better than the combination of all the evolved networks. The proposed model is applied to ten real-world classification problems of a very different nature from the UCI machine learning repository and the Proben1 benchmark set. In all of them, the performance of the model is better than that of standard ensembles in terms of generalization error, and the obtained ensembles are also smaller.
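    The claim that a subset of networks can beat the combination of all of them is easy to demonstrate on a toy case. In this illustrative Python sketch the "networks" are hard-coded predictors and the subset search is exhaustive; in the paper the subsets are found by evolving a population of ensembles, not by brute force:

```python
from itertools import combinations

def majority_vote(predictors, x):
    # Combine the member networks by plurality vote on the label.
    votes = [p(x) for p in predictors]
    return max(set(votes), key=votes.count)

def ensemble_accuracy(predictors, data):
    return sum(majority_vote(predictors, x) == y for x, y in data) / len(data)

def best_subset(predictors, data):
    # Exhaustive search over proper subsets stands in for the evolved
    # population of ensembles in the paper.
    best, best_acc = list(predictors), ensemble_accuracy(predictors, data)
    for r in range(1, len(predictors)):
        for subset in combinations(predictors, r):
            acc = ensemble_accuracy(list(subset), data)
            if acc > best_acc:
                best, best_acc = list(subset), acc
    return best, best_acc

# Invented toy members: one accurate network and two degenerate ones.
p1 = lambda x: int(x > 0)   # perfect on the toy data below
p2 = lambda x: 1            # always predicts class 1
p3 = lambda x: 1            # always predicts class 1
data = [(1, 1), (2, 1), (-1, 0), (-2, 0)]
subset, acc = best_subset([p1, p2, p3], data)
```

    Here the full three-member vote is dragged down to 50% accuracy by the two degenerate members, while the singleton subset containing the good network is perfect; selecting subsets is what the evolved ensemble population does, with multiobjective fitness additionally rewarding members that cooperate rather than merely agree.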

    Projection based ensemble learning for ordinal regression

    The classification of patterns into naturally ordered labels is referred to as ordinal regression. This paper proposes an ensemble methodology specifically adapted to this type of problem, based on computing different classification tasks through the formulation of different order hypotheses. Every single model is trained to distinguish between one given class (k) and all the remaining ones, but grouping the latter into those classes with a rank lower than k and those with a rank higher than k. It can therefore be considered a reformulation of the well-known one-versus-all scheme. The base algorithm for the ensemble could be any threshold (or even probabilistic) method, such as the ones selected in this paper: kernel discriminant analysis, support vector machines and logistic regression (all reformulated to deal with ordinal regression problems). The method is seen to be competitive when compared with other state-of-the-art methodologies (both ordinal and nominal), using six measures and a total of fifteen ordinal datasets. Furthermore, an additional set of experiments is used to study the potential scalability and interpretability of the proposed method when using logistic regression as the base methodology for the ensemble.
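    A minimal Python sketch shows the flavor of order-based decomposition. For brevity it uses the cumulative "is the label greater than k?" decomposition, a simpler relative of the one-versus-grouped-rest scheme described above, not the paper's exact formulation:

```python
def ordinal_targets(y, num_classes):
    # Cumulative decomposition: for each threshold k = 0 .. K-2,
    # the binary target answers "is the label greater than k?".
    return [[int(label > k) for k in range(num_classes - 1)] for label in y]

def decode(binary_probs):
    # Predicted rank = number of thresholds the pattern is estimated
    # to exceed (probability above one half). This respects the
    # natural ordering of the labels.
    return sum(p > 0.5 for p in binary_probs)
```

    Because each binary subproblem keeps the classes on each side of a rank boundary together, every base model only ever has to separate "low ranks" from "high ranks", which is exactly the structure an ordinal problem provides and a nominal one-versus-all scheme throws away.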

    Error-Correcting Output Codes in the Framework of Deep Ordinal Classification

    Automatic classification tasks on structured data have been revolutionized by Convolutional Neural Networks (CNNs), but the focus has been on binary and nominal classification tasks. Only recently has ordinal classification (where class labels present a natural ordering) been tackled through the framework of CNNs. Moreover, ordinal classification datasets commonly present a high imbalance in the number of samples of each class, making the problem even harder. Focus should therefore be shifted from classic classification metrics towards per-class metrics (like AUC or Sensitivity) and rank-agreement metrics (like Cohen's Kappa or Spearman's rank correlation coefficient). We present a new CNN architecture based on the Ordinal Binary Decomposition (OBD) technique using Error-Correcting Output Codes (ECOC). We aim to show experimentally, using four different CNN architectures and two ordinal classification datasets, that the OBD+ECOC methodology significantly improves the mean results on the relevant ordinal and class-balancing metrics. The proposed method is able to outperform a nominal approach as well as already existing ordinal approaches, achieving a mean performance of RMSE=1.0797 for the Retinopathy dataset and RMSE=1.1237 for the Adience dataset, averaged over 4 different architectures.
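    The decoding side of OBD+ECOC can be sketched independently of any CNN. In this hedged Python sketch (the cumulative code matrix and distance choice are one common way to instantiate the idea, not necessarily the paper's exact codes), the class is recovered as the codeword closest to the network's output vector:

```python
def obd_codewords(num_classes):
    # Cumulative OBD code matrix: class c maps to c ones followed by
    # zeros, so adjacent classes differ in exactly one bit.
    return [[1] * c + [0] * (num_classes - 1 - c) for c in range(num_classes)]

def ecoc_decode(outputs, codewords):
    # Assign the class whose codeword is closest (squared Euclidean
    # distance) to the vector of per-threshold network outputs; small
    # per-bit errors are thereby corrected.
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(range(len(codewords)), key=lambda c: dist(outputs, codewords[c]))
```

    With five ordered classes, the codewords are [0,0,0,0], [1,0,0,0], ..., [1,1,1,1]; a soft output like [0.9, 0.8, 0.2, 0.1] decodes to class 2 even though no single bit is exact, which is the error-correcting benefit over thresholding each output independently.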

    Borderline kernel based over-sampling

    Nowadays, the imbalanced nature of some real-world data is receiving a lot of attention from the pattern recognition and machine learning communities in both theoretical and practical aspects, giving rise to different promising approaches to handling it. However, preprocessing methods operate in the original input space, presenting distortions when combined with kernel classifiers, which operate in the feature space induced by a kernel function. This paper explores the notion of empirical feature space (a Euclidean space which is isomorphic to the feature space and therefore preserves its structure) to derive a kernel-based synthetic over-sampling technique based on borderline instances, which are considered crucial for establishing the decision boundary. Therefore, the proposed methodology would maintain the main properties of the kernel mapping while reinforcing the decision boundaries induced by a kernel machine. The results show that the proposed method achieves better results than the same borderline over-sampling method applied in the original input space.
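    The borderline over-sampling idea itself is simple to sketch. The following Python sketch works directly in the input space (the paper's contribution is precisely to do this in the empirical feature space instead); the neighbourhood rule, thresholds and toy data are assumptions for illustration:

```python
import random

def sq_dist(a, b):
    return sum((x - y) ** 2 for x, y in zip(a, b))

def borderline_minority(data, minority_label, k=3):
    # A minority point is "borderline" when at least half of its k
    # nearest neighbours belong to another class, but not all of them
    # (all-enemy neighbourhoods are treated as noise and skipped).
    borderline = []
    for x, y in data:
        if y != minority_label:
            continue
        others = [d for d in data if d[0] is not x]
        knn = sorted(others, key=lambda d: sq_dist(x, d[0]))[:k]
        enemies = sum(1 for _, yy in knn if yy != minority_label)
        if k // 2 <= enemies < k:
            borderline.append(x)
    return borderline

def oversample(data, minority_label, n_new, k=3, seed=0):
    # Synthesize new minority points by interpolating between a
    # borderline instance and a randomly chosen minority mate.
    rng = random.Random(seed)
    minority = [x for x, y in data if y == minority_label]
    border = borderline_minority(data, minority_label, k)
    synthetic = []
    for _ in range(n_new):
        x = rng.choice(border)
        mate = rng.choice(minority)
        t = rng.random()
        synthetic.append(tuple(a + t * (b - a) for a, b in zip(x, mate)))
    return synthetic

# Invented toy data: minority class 1 near the class boundary.
data = [((0.0, 0.0), 1), ((0.5, 0.0), 1), ((0.6, 0.0), 1),
        ((1.0, 0.0), 0), ((1.1, 0.0), 0)]
border = borderline_minority(data, 1)
syn = oversample(data, 1, 5)
```

    Concentrating the synthetic points around borderline instances, instead of over-sampling the whole minority class uniformly, is what reinforces the decision boundary; doing it in the empirical feature space additionally keeps the new points consistent with the kernel geometry the classifier actually sees.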